ACM/IEEE International Conference
Cognitive Trust in HRI: "Pay Attention to Me and I'll Trust You Even if You are Wrong"
Manor, Adi, Cohen, Dan, Keidar, Ziv, Parush, Avi, Erel, Hadas
Cognitive trust, the belief that a robot is capable of accurately performing tasks, is recognized as a central factor in fostering high-quality human-robot interactions. It is well established that performance factors such as the robot's competence and its reliability shape cognitive trust. Recent studies suggest that affective factors, such as robotic attentiveness, also play a role in building cognitive trust. This work explores the interplay between these two factors that shape cognitive trust. Specifically, we evaluated whether different combinations of robotic competence and attentiveness introduce a compensatory mechanism, where one factor compensates for the lack of the other. In the experiment, participants performed a search task with a robotic dog in a 2x2 experimental design that included two factors: competence (high or low) and attentiveness (high or low). The results revealed that high attentiveness can compensate for low competence. Participants who collaborated with a highly attentive robot that performed poorly reported trust levels comparable to those working with a highly competent robot. When the robot did not demonstrate attentiveness, low competence resulted in a substantial decrease in cognitive trust. The findings indicate that building cognitive trust in human-robot interaction may be more complex than previously believed, involving emotional processes that are typically overlooked. We highlight an affective compensatory mechanism that adds a layer to consider alongside traditional competence-based models of cognitive trust.
- North America > United States > District of Columbia > Washington (0.05)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Asia > China > Shandong Province > Qingdao (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Robotic capabilities framework: A boundary object and intermediate-level knowledge artifact for co-designing robotic processes
Ianniello, Alessandro, Murray-Rust, Dave, Muscolo, Sara, Siebinga, Olger, Mol, Nicky, Zatyagov, Denis, Verhoef, Eva, Forster, Deborah, Abbink, David
As robots become more adaptable, responsive, and capable of interacting with humans, the design of effective human-robot collaboration becomes critical. Yet, this design process is typically led by monodisciplinary approaches, often overlooking interdisciplinary knowledge and the experiential knowledge of workers who will ultimately share tasks with these systems. To address this gap, we introduce the robotic capabilities framework, a vocabulary that enables transdisciplinary collaborations to meaningfully shape the future of work when robotic systems are integrated into the workplace. Rather than focusing on the internal workings of robots, the framework centers discussion on high-level capabilities, supporting dialogue around which elements of a task should remain human-led and which can be delegated to robots. We developed the framework through reflexive and iterative processes, and applied it in two distinct settings: by engaging roboticists in describing existing commercial robots using its vocabulary, and through a design activity with students working on robotics-related projects. The framework emerges as an intermediate-level knowledge artifact and a boundary object that bridges technical and experiential domains, guiding designers, empowering workers, and contributing to more just and collaborative futures of work.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Europe > Netherlands > South Holland > Delft (0.04)
- (15 more...)
- Research Report (1.00)
- Instructional Material (0.93)
The Role of Consequential and Functional Sound in Human-Robot Interaction: Toward Audio Augmented Reality Interfaces
Smith, Aliyah, Kennedy, Monroe III
As robots become increasingly integrated into everyday environments, understanding how they communicate with humans is critical. Sound offers a powerful channel for interaction, encompassing both operational noises and intentionally designed auditory cues. In this study, we examined the effects of consequential and functional sounds on human perception and behavior, including a novel exploration of spatial sound through localization and handover tasks. Results show that consequential sounds of the Kinova Gen3 manipulator did not negatively affect perceptions, spatial localization is highly accurate for lateral cues but declines for frontal cues, and spatial sounds can simultaneously convey task-relevant information while promoting warmth and reducing discomfort. These findings highlight the potential of functional and transformative auditory design to enhance human-robot collaboration and inform future sound-based interaction strategies. Audio Augmented Reality remains a comparatively underexplored domain within the broader field of Augmented Reality (AR) research [1]. While recent advancements in AR technologies have spurred extensive investigation into visual augmentation--where virtual objects are seamlessly integrated into the physical environment--research on auditory augmentation has lagged behind.
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
QuickLAP: Quick Language-Action Preference Learning for Autonomous Driving Agents
Nader, Jordan Abi, Lee, David, Dennler, Nathaniel, Bobu, Andreea
Robots must learn from both what people do and what they say, but either modality alone is often incomplete: physical corrections are grounded but ambiguous in intent, while language expresses high-level goals but lacks physical grounding. We introduce QuickLAP: Quick Language-Action Preference learning, a Bayesian framework that fuses physical and language feedback to infer reward functions in real time. Our key insight is to treat language as a probabilistic observation over the user's latent preferences, clarifying which reward features matter and how physical corrections should be interpreted. QuickLAP uses Large Language Models (LLMs) to extract reward feature attention masks and preference shifts from free-form utterances, which it integrates with physical feedback in a closed-form update rule. This enables fast, real-time, and robust reward learning that handles ambiguous feedback. In a semi-autonomous driving simulator, QuickLAP reduces reward learning error by over 70% compared to physical-only and heuristic multimodal baselines. A 15-participant user study further validates our approach: participants found QuickLAP significantly more understandable and collaborative, and preferred its learned behavior over baselines. Code is available at https://github.com/MIT-CLEAR-Lab/QuickLAP.
- Asia > Middle East > Jordan (0.40)
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > New York > New York County > New York City (0.14)
- (19 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (0.89)
- Transportation > Ground > Road (0.84)
- Automobiles & Trucks (0.84)
- Information Technology > Robotics & Automation (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.88)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.85)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.66)
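The QuickLAP abstract describes fusing two feedback channels: an LLM extracts a per-feature attention mask and preference shift from an utterance, and these gate how a physical correction updates the reward weights. A minimal sketch of that idea, assuming a linear reward over hand-picked features; the function name, update form, and step sizes are illustrative simplifications, not the paper's actual closed-form rule:

```python
import numpy as np

def quicklap_style_update(w, phi_correction, mask, shift, lr=0.5, lam=0.8):
    """One illustrative language-gated reward update (hypothetical simplification).

    w              : current reward feature weights
    phi_correction : feature difference induced by a physical correction
                     (corrected trajectory features minus original features)
    mask           : LLM-derived attention mask (1 = utterance says this
                     feature matters, 0 = unmentioned)
    shift          : LLM-derived preference direction per feature
                     (+1 increase, -1 decrease, 0 unspecified)
    """
    # Language as an observation over latent preferences: nudge weights in
    # the stated direction, but only on the features the utterance mentions.
    w = w + lr * mask * shift
    # Physical feedback: a preference-learning-style step, gated by the same
    # mask so an ambiguous correction only moves language-relevant features.
    w = w + lam * mask * phi_correction
    return w

# Toy driving example with assumed features [speed, gap_to_car, lane_offset].
# Utterance "slow down" -> mask selects speed, shift is negative on speed.
w = quicklap_style_update(
    w=np.zeros(3),
    phi_correction=np.array([-0.4, 0.1, 0.2]),
    mask=np.array([1.0, 0.0, 0.0]),
    shift=np.array([-1.0, 0.0, 0.0]),
)
print(w)  # only the speed weight moves; the ambiguous correction is disambiguated
```

The design choice the abstract emphasizes is visible even in this toy: without the mask, the physical correction would spuriously shift all three weights; with it, the language resolves which feature the correction was about.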
OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving
Liang, Xiaoyu, Liu, Ziang, Lin, Kelvin, Gu, Edward, Ye, Ruolin, Nguyen, Tam, Hsu, Cynthia, Wu, Zhanxin, Yang, Xiaoman, Cheung, Christy Sum Yu, Soh, Harold, Dimitropoulou, Katherine, Bhattacharjee, Tapomayukh
We present OpenRoboCare, a multimodal dataset for robot caregiving, capturing expert occupational therapist demonstrations of Activities of Daily Living (ADLs). Caregiving tasks involve complex physical human-robot interactions, requiring precise perception under occlusions, safe physical contact, and long-horizon planning. While recent advances in robot learning from demonstrations have shown promise, there is a lack of a large-scale, diverse, and expert-driven dataset that captures real-world caregiving routines. To address this gap, we collect data from 21 occupational therapists performing 15 ADL tasks on two manikins. The dataset spans five modalities: RGB-D video, pose tracking, eye-gaze tracking, task and action annotations, and tactile sensing, providing rich multimodal insights into caregiver movement, attention, force application, and task execution strategies. We further analyze expert caregiving principles and strategies, offering insights to improve robot efficiency and task feasibility. Additionally, our evaluations demonstrate that OpenRoboCare presents challenges for state-of-the-art robot perception and human activity recognition methods, both critical for developing safe and adaptive assistive robots, highlighting the value of our contribution. See our website for additional visualizations: https://emprise.cs.cornell.edu/robo-care/.
- North America > United States > Massachusetts > Middlesex County > Lowell (0.14)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- Europe > Greece (0.04)
- Asia > Singapore > Central Region > Singapore (0.04)
A short methodological review on social robot navigation benchmarking
Chhetri, Pranup, Torrejon, Alejandro, Eslava, Sergio, Manso, Luis J.
Social Robot Navigation is the skill that allows robots to move efficiently in human-populated environments while ensuring safety, comfort, and trust. Unlike other areas of research, the scientific community has not yet reached agreement on how Social Robot Navigation should be benchmarked. This matters, as the lack of a de facto benchmarking standard for Social Robot Navigation can hinder the progress of the field and may lead to contradicting conclusions. Motivated by this gap, we contribute a short review focused exclusively on benchmarking trends in the period from January 2020 to July 2025. Of the 130 papers identified by our search using IEEE Xplore, we analysed the 85 papers that met the criteria of the review. This review addresses the metrics used in the literature for benchmarking purposes, the algorithms employed in such benchmarks, the use of human surveys for benchmarking, and how conclusions are drawn from the benchmarking results, when applicable.
- North America > United States > Michigan > Wayne County > Detroit (0.05)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- (33 more...)
- Transportation (0.46)
- Health & Medicine (0.46)
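The benchmarking metrics such reviews survey are typically simple trajectory statistics. A minimal sketch of two commonly reported ones, success rate and minimum robot-human distance; the function name, goal tolerance, and metric definitions here are assumptions for illustration, not taken from the review itself:

```python
import math

def social_nav_metrics(robot_path, human_positions, goal, goal_tol=0.3):
    """Compute two illustrative social-navigation benchmark metrics
    (definitions and the 0.3 m goal tolerance are assumed, not standard):
    - success: did the final robot pose land within goal_tol of the goal?
    - min_human_dist: closest the robot ever came to any human.
    Positions are (x, y) tuples in metres.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    success = dist(robot_path[-1], goal) <= goal_tol
    min_human_dist = min(dist(p, h) for p in robot_path for h in human_positions)
    return {"success": success, "min_human_dist": min_human_dist}

# Toy run: robot drives straight to the goal, passing one metre from a human.
result = social_nav_metrics(
    robot_path=[(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    human_positions=[(1.0, 1.0)],
    goal=(2.0, 0.0),
)
print(result)  # -> {'success': True, 'min_human_dist': 1.0}
```

The review's point, that the field lacks agreement on exactly these definitions (what counts as success, what distance threshold is "comfortable"), is precisely why the thresholds above must be treated as local choices rather than a standard.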
Trust and Human Autonomy after Cobot Failures: Communication is Key for Industry 5.0
Glawe, Felix, Kremer, Laura, Vervier, Luisa, Brauner, Philipp, Ziefle, Martina
Collaborative robots (cobots) are a core technology of Industry 4.0. Industry 4.0 uses cyber-physical systems, IoT and smart automation to improve efficiency and data-driven decision-making. Cobots, as cyber-physical systems, enable the introduction of lightweight automation to smaller companies through their flexibility, low cost and ability to work alongside humans, while keeping humans and their skills in the loop. Industry 5.0, the evolution of Industry 4.0, places the worker at the centre of its principles: The physical and mental well-being of the worker is the main goal of new technology design, not just productivity, efficiency and safety standards. Within this concept, human trust in cobots and human autonomy are important. While trust is essential for effective and smooth interaction, the workers' perception of autonomy is key to intrinsic motivation and overall well-being. As failures are an inevitable part of technological systems, this study aims to answer the question of how system failures affect trust in cobots as well as human autonomy, and how they can be recovered afterwards. Therefore, a VR experiment (n = 39) was set up to investigate the influence of a cobot failure and its severity on human autonomy and trust in the cobot. Furthermore, the influence of transparent communication about the failure and next steps was investigated. The results show that both trust and autonomy suffer after cobot failures, with the severity of the failure having a stronger negative impact on trust, but not on autonomy. Both trust and autonomy can be partially restored by transparent communication.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > New York (0.05)
- (10 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education (1.00)
- Information Technology (0.67)
- Health & Medicine > Therapeutic Area (0.46)
Brain-Robot Interface for Exercise Mimicry
Bettosi, Carl, Nault, Emilyann, Baillie, Lynne, Garschall, Markus, Romeo, Marta, Wais-Zechmann, Beatrix, Binderlehner, Nicole, Georgiou, Theodoros
For social robots to maintain long-term engagement as exercise instructors, rapport-building is essential. Motor mimicry--imitating one's physical actions--during social interaction has long been recognized as a powerful tool for fostering rapport, and it is widely used in rehabilitation exercises where patients mirror a physiotherapist or video demonstration. We developed a novel Brain-Robot Interface (BRI) that allows a social robot instructor to mimic a patient's exercise movements in real-time, using mental commands derived from the patient's intention. The system was evaluated in an exploratory study with 14 participants (3 physiotherapists and 11 hemiparetic patients recovering from stroke or other injuries). We found our system successfully demonstrated exercise mimicry in 12 sessions; however, accuracy varied. Participants had positive perceptions of the robot instructor, with high trust and acceptance levels, which were not affected by the introduction of BRI technology.
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.68)
A Matter of Height: The Impact of a Robotic Object on Human Compliance
Faber, Michael, Grishko, Andrey, Waksberg, Julian, Pardo, David, Leivy, Tomer, Hazan, Yuval, Talmansky, Emanuel, Megidish, Benny, Erel, Hadas
Robots come in various forms and have different characteristics that may shape the interaction with them. In human-human interactions, height is a characteristic that shapes human dynamics, with taller people typically perceived as more persuasive. In this work, we evaluated whether the same effect replicates in human-robot interaction, specifically with a highly non-humanoid robotic object. The robot was designed with modules that could be easily added or removed, allowing us to change its height without altering other design features. To test the impact of the robot's height, we evaluated participants' compliance with its request to volunteer to perform a tedious task. In the experiment, participants performed a cognitive task on a computer, which was framed as the main experiment. When done, they were informed that the experiment was completed. While waiting to receive their credits, the robotic object, designed as a mobile robotic service table, entered the room, carrying a tablet that invited participants to complete a 300-question questionnaire voluntarily. We compared participants' compliance in two conditions: a Short robot (two modules, 95 cm in height) and a Tall robot (three modules, 132 cm in height). Our findings revealed higher compliance with the Short robot's request, demonstrating an opposite pattern to human dynamics. We conclude that while height has a substantial social impact on human-robot interactions, it follows a unique pattern of influence. Our findings suggest that designers cannot simply adopt and implement elements from human social dynamics in robots without testing them first.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Italy > Campania > Naples (0.04)
- Asia > Japan > Honshū > Kantō > Ibaraki Prefecture > Tsukuba (0.04)
The Influence of Facial Features on the Perceived Trustworthiness of a Social Robot
Barrow, Benedict, Moore, Roger K.
Trust and the perception of trustworthiness play an important role in decision-making and our behaviour towards others, and this is true not only of human-human interactions but also of human-robot interactions. While significant advances have been made in recent years in the field of social robotics, there is still some way to go before we fully understand the factors that influence human trust in robots. This paper presents the results of a study into the first impressions created by a social robot's facial features, based on the hypothesis that a 'babyface' engenders trust. By manipulating the back-projected face of a Furhat robot, the study confirms that eye shape and size have a significant impact on the perception of trustworthiness. The work thus contributes to an understanding of the design choices that need to be made when developing social robots so as to optimise the effectiveness of human-robot interaction. Trust is a fundamental building block for any society to function properly.
- Europe > United Kingdom > England > South Yorkshire > Sheffield (0.04)
- Oceania > New Zealand > South Island > Canterbury Region > Christchurch (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (3 more...)